Some Generative AI Company Employees Pen Letter Seeking ‘Right to Warn’ About Risks
Some current and former employees of OpenAI, Google DeepMind and Anthropic published a letter on June 4 asking for whistleblower protections, more open dialogue about risks and “a culture of open criticism” in the major generative AI companies.
The Right to Warn letter illuminates some of the inner workings of the few high-profile companies that sit in the generative AI spotlight. OpenAI holds a distinct status as a nonprofit trying to “navigate massive risks” of theoretical “general” AI.
For businesses, the letter arrives amid mounting pressure to adopt generative AI tools; it also reminds technology decision-makers of the importance of strong policies around AI use.
Right to Warn letter asks frontier AI companies not to retaliate against whistleblowers, among other demands
The demands are:
- An end to enforcement of agreements that prevent “disparagement” of advanced AI companies.
- Creation of an anonymous, approved channel for employees to raise risk concerns with the companies, regulators or independent organizations.
- Support for “a culture of open criticism” around risk, with allowances for trade secrets.
- An end to retaliation against whistleblowers.
The letter comes about two weeks after an internal shuffle at OpenAI revealed restrictive nondisclosure agreements for departing employees. Allegedly, breaking the nondisclosure and non-disparagement agreement could cost employees their vested equity in the company, which could be worth far more than their salaries. On May 18, OpenAI CEO Sam Altman said on X that he was “embarrassed” by the potential for canceling employees’ vested equity and that the agreement would be changed.
All of the current OpenAI employees who signed the Right to Warn letter did so anonymously.
What potential dangers of generative AI does the letter address?
The open letter addresses potential dangers from generative AI, naming risks that “range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”
OpenAI’s stated purpose has, since its inception, been to both create and safeguard artificial general intelligence, sometimes called general AI. AGI refers to theoretical AI that is smarter or more capable than humans, a definition that conjures science-fiction images of murderous machines and humans treated as second-class citizens. Some critics of AI call these fears a distraction from more pressing concerns at the intersection of technology and culture, such as the theft of creative work. The letter writers mention both existential and social threats.
How might caution from inside the tech industry affect what AI tools are available to enterprises?
Companies that are not frontier AI companies but are deciding how to move forward with generative AI could take this letter as a prompt to review their AI usage policies, their security and reliability vetting of AI products, and their data provenance practices when using generative AI.
SEE: Organizations should carefully consider an AI ethics policy customized to their business goals.
Juliette Powell, co-author of “The AI Dilemma” and a New York University professor on the ethics of artificial intelligence and machine learning, has studied the outcomes of employee protests against corporate practices for years.
“Open letters of caution from employees alone don’t amount to much without the support of the public, who have a few more mechanisms of power when combined with those of the press,” she said in an email to TechRepublic. For example, Powell said, writing op-eds, putting public pressure on companies’ boards or withholding investments in frontier AI companies might be more effective than signing an open letter.
Powell referred to last year’s request for a six-month pause on the development of AI as another example of a letter of this type.
“I think the chance of big tech agreeing to the terms of these letters – AND ENFORCING THEM – are about as probable as computer and systems engineers being held accountable for what they built in the way that a structural engineer, a mechanical engineer or an electrical engineer would be,” Powell said. “Thus, I don’t see a letter like this affecting the availability or use of AI tools for business/enterprise.”
OpenAI has always paired its pursuit of increasingly capable generative AI with an acknowledgment of risk, so this letter may arrive at a time when many businesses have already weighed the pros and cons of generative AI products for themselves. Conversations within organizations about AI usage policies could embrace the letter’s call for “a culture of open criticism.” Business leaders could consider enforcing protections for employees who discuss potential risks, or choosing to invest only in AI products they find to have a responsible ecosystem of social, ethical and data governance.